33 research outputs found
Advanced physical modeling of SiOx resistive random access memories
We apply a three-dimensional (3D) physical simulator, coupling self-consistently stochastic kinetic Monte Carlo descriptions of ion and electron transport, to investigate switching in silicon-rich silica (SiOx) redox-based resistive random-access memory (RRAM) devices. We explain the intrinsic nature of resistance switching of the SiOx layer, and demonstrate the impact of self-heating effects and the initial vacancy distributions on switching. We also highlight the necessity of using 3D physical modeling to correctly predict the switching behavior. The simulation framework is useful for exploring the little-known physics of SiOx RRAMs and RRAM devices in general, and supports efficient device and circuit designs in terms of performance, variability and reliability.
Investigation of resistance switching in SiOx RRAM cells using a 3D multi-scale kinetic Monte Carlo simulator
We employ an advanced three-dimensional (3D) electro-thermal simulator to explore the physics and potential of oxide-based resistive random-access memory (RRAM) cells. The recently developed physical simulation model couples a kinetic Monte Carlo description of electron and ionic transport to the self-heating phenomenon, while accounting carefully for the physics of vacancy generation and recombination, and trapping mechanisms. The simulation framework successfully captures resistance switching, including the electroforming, set and reset processes, by modeling the dynamics of conductive filaments in the 3D space. This work focuses on the promising yet less studied RRAM structures based on silicon-rich silica (SiOx). We explain the intrinsic nature of resistance switching of the SiOx layer, analyze the effect of self-heating on device performance, highlight the role of the initial vacancy distributions acting as precursors for switching, and also stress the importance of using 3D physics-based models to accurately capture the switching processes. The simulation work is backed by experimental studies. The simulator is useful for improving our understanding of the little-known physics of SiOx resistive memory devices, as well as other oxide-based RRAM systems (e.g. transition metal oxide RRAMs), offering design and optimization capabilities with regard to the reliability and variability of memory cells.
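The abstract above describes a kinetic Monte Carlo (kMC) treatment of vacancy generation and recombination. As a rough illustration only (not the authors' simulator), a minimal 1D Gillespie-style kMC loop with assumed Arrhenius rates and illustrative energy barriers might look like:

```python
import math
import random

K_B = 8.617e-5  # Boltzmann constant in eV/K


def arrhenius(prefactor, barrier_ev, temperature_k):
    """Rate of a thermally activated transition (assumed Arrhenius form)."""
    return prefactor * math.exp(-barrier_ev / (K_B * temperature_k))


def kmc_vacancies(n_sites=50, steps=2000, temperature_k=600.0, seed=0):
    """Toy 1D kMC: an empty site may generate a vacancy, an occupied
    site may recombine. Barriers (1.2 eV / 1.0 eV) are illustrative
    placeholders, not fitted SiOx parameters."""
    rng = random.Random(seed)
    occupied = [False] * n_sites
    t = 0.0
    for _ in range(steps):
        # Build the event list: one candidate transition per site.
        events = []
        for i, occ in enumerate(occupied):
            if occ:  # vacancy recombination
                events.append((i, arrhenius(1e13, 1.0, temperature_k)))
            else:    # vacancy generation
                events.append((i, arrhenius(1e13, 1.2, temperature_k)))
        total = sum(rate for _, rate in events)
        # Select an event with probability proportional to its rate.
        pick = rng.uniform(0.0, total)
        acc = 0.0
        for i, rate in events:
            acc += rate
            if acc >= pick:
                occupied[i] = not occupied[i]
                break
        # Advance physical time by an exponentially distributed increment.
        t += -math.log(1.0 - rng.random()) / total
    return sum(occupied), t


n_vac, elapsed = kmc_vacancies()
print(n_vac, elapsed)
```

A production simulator of the kind described would extend this with 3D geometry, field-dependent barriers, electron transport and a coupled heat equation; the sketch only shows the core rate-selection loop.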
Memristors -- from In-memory computing, Deep Learning Acceleration, Spiking Neural Networks, to the Future of Neuromorphic and Bio-inspired Computing
Machine learning, particularly in the form of deep learning, has driven most of the recent fundamental developments in artificial intelligence. Deep learning is based on computational models that are, to a certain extent, bio-inspired, as they rely on networks of connected simple computing units operating in parallel. Deep learning has been successfully applied in areas such as object/pattern recognition, speech and natural language processing, self-driving vehicles, intelligent self-diagnostics tools, autonomous robots, knowledgeable personal assistants, and monitoring. These successes have been mostly supported by three factors: availability of vast amounts of data, continuous growth in computing power, and algorithmic innovations. The approaching demise of Moore's law, and the consequent expected modest improvements in computing power that can be achieved by scaling, raise the question of whether the described progress will be slowed or halted due to hardware limitations. This paper reviews the case for a novel beyond-CMOS hardware technology, memristors, as a potential solution for the implementation of power-efficient in-memory computing, deep learning accelerators, and spiking neural networks. Central themes are the reliance on non-von-Neumann computing architectures and the need for developing tailored learning and inference algorithms. To argue that lessons from biology can be useful in providing directions for further progress in artificial intelligence, we briefly discuss an example based on reservoir computing. We conclude the review by speculating on the big-picture view of future neuromorphic and brain-inspired computing systems.
Keywords: memristor, neuromorphic, AI, deep learning, spiking neural networks, in-memory computing
Unipolar potentiation and depression in memristive devices utilising the subthreshold regime
We present a resistance switching device that exhibits analogue potentiation and depression of conductance under the same voltage polarity. This contrasts with previously studied devices that potentiate and depress under opposite polarities. We refer to this mode of operation as the subthreshold regime because it occurs at voltage or current biases that are insufficient to produce discrete or non-volatile switching. This behaviour has the potential to reduce the complexity of neuronal and synaptic circuitry in neuromorphic computing by removing the need for voltage pulses of both positive and negative polarities. The characteristically long timescales may also help replicate bio-realistic timings. In this paper, we detail how to induce this unique behaviour, how to tune its properties to a desired response, and finally, we demonstrate one potential application.
Nonideality-Aware Training for Accurate and Robust Low-Power Memristive Neural Networks
Recent years have seen a rapid rise of artificial neural networks being employed in a number of cognitive tasks. The ever-increasing computing requirements of these structures have contributed to a desire for novel technologies and paradigms, including memristor-based hardware accelerators. Solutions based on memristive crossbars and analog data processing promise to improve the overall energy efficiency. However, memristor nonidealities can lead to the degradation of neural network accuracy, while attempts to mitigate these negative effects often introduce design trade-offs, such as those between power and reliability. In this work, we design nonideality-aware training of memristor-based neural networks capable of dealing with the most common device nonidealities. We demonstrate the feasibility of using high-resistance devices that exhibit high nonlinearity: by analyzing experimental data and employing nonideality-aware training, we estimate that the energy efficiency of memristive vector-matrix multipliers is improved by three orders of magnitude (to $381\ \mathrm{TOPs}^{-1}\mathrm{W}^{-1}$) while maintaining similar accuracy. We show that associating the parameters of neural networks with individual memristors allows these devices to be biased towards less conductive states through regularization of the corresponding optimization problem, while modifying the validation procedure leads to more reliable estimates of performance. We demonstrate the universality and robustness of our approach when dealing with a wide range of nonidealities.
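The abstract above combines three ideas: an (ideal) memristive vector-matrix multiply, a stochastic device model, and a regularizer that biases devices towards less conductive states. The sketch below is a conceptual illustration under assumed, simplified models (the function names and the multiplicative-noise nonideality are this sketch's assumptions, not the paper's implementation):

```python
import random


def vmm(conductances, voltages):
    """Ideal crossbar vector-matrix multiply: output current
    I_j = sum_i G[i][j] * V[i] (Ohm's and Kirchhoff's laws)."""
    return [sum(g * v for g, v in zip(column, voltages))
            for column in zip(*conductances)]


def program_with_noise(target_g, rel_sigma=0.1, rng=None):
    """Model programming variability as multiplicative Gaussian noise --
    an assumed, simplified stand-in for real device nonidealities."""
    rng = rng or random.Random(0)
    return [[g * (1.0 + rng.gauss(0.0, rel_sigma)) for g in row]
            for row in target_g]


def conductance_penalty(conductances, lam=1e-3):
    """Regularization term added to the task loss so that training
    prefers less conductive (lower-power) states."""
    return lam * sum(g for row in conductances for g in row)


ideal = [[1.0, 0.0], [0.0, 1.0]]
print(vmm(ideal, [2.0, 3.0]))                  # ideal readout
print(vmm(program_with_noise(ideal), [2.0, 3.0]))  # nonideal readout
```

In a full training loop, `conductance_penalty` would be added to the task loss, so that gradient descent trades a small accuracy cost for lower programming currents, and the noisy forward pass would replace the ideal one during training so the network learns to tolerate the nonidealities.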
Thin-film design of amorphous hafnium oxide nanocomposites enabling strong interfacial resistive switching uniformity
A design concept of phase-separated amorphous nanocomposite thin films is presented that realizes interfacial resistive switching (RS) in hafnium oxide-based devices. The films are formed by incorporating an average of 7% Ba into hafnium oxide during pulsed laser deposition at temperatures ≤400°C. The added Ba prevents the films from crystallizing and leads to ∼20-nm-thin films consisting of an amorphous HfOx host matrix interspersed with ∼2-nm-wide, ∼5-to-10-nm-pitch Ba-rich amorphous nanocolumns penetrating approximately two-thirds through the films. This restricts the RS to an interfacial Schottky-like energy barrier whose magnitude is tuned by ionic migration under an applied electric field. Resulting devices achieve stable cycle-to-cycle, device-to-device, and sample-to-sample reproducibility with a measured switching endurance of ≥10⁴ cycles for a memory window ≥10 at switching voltages of ±2 V. Each device can be set to multiple intermediate resistance states, which enables synaptic spike-timing-dependent plasticity. The presented concept unlocks additional design variables for RS devices.
CMOS and memristive hardware for neuromorphic computing
The ever-increasing processing power demands of digital computers cannot continue to be fulfilled indefinitely unless there is a paradigm shift in computing. Neuromorphic computing, which takes inspiration from the highly parallel, low power, high speed, and noise-tolerant computing capabilities of the brain, may provide such a shift. To that end, various aspects of the brain, from its basic building blocks, such as neurons and synapses, to its massively parallel in-memory computing networks, have been studied by the neuroscience community. Concurrently, many researchers from across academia and industry have been studying materials, devices, circuits, and systems to implement some of the functions of networks of neurons and synapses and to develop bio-inspired (neuromorphic) computing platforms.